An in-depth interview with Jessie Zhang, Head of Incubation and Investment at ZetaChain
When the blockchain space began pondering "what's next," AI almost inevitably became a topic of discussion. But for Jessie Zhang, this is not merely a technological overlay; it is a long-term struggle concerning privacy, cognitive sovereignty, and humanity's independent thinking ability.
As a core contributor to ZetaChain, Jessie operates at the intersection of infrastructure, product incubation, investment, and long-term strategy. In 2025, after ZetaChain reached a key milestone, the team decided to launch what looks like an "atypical blockchain product": Anuma, a privacy-focused AI platform built around a "private memory layer."
In this interview, we started by reviewing ZetaChain's 2025 and ultimately touched upon a more fundamental question:
In a world reshaped by large models, can humanity still maintain genuine independent thought?
Starting with Cross-Chain: ZetaChain's 2025
PANews: Many people know ZetaChain from the keyword "cross-chain." How would you summarize what ZetaChain does, and what progress did you make in 2025?
Jessie:
2025 was a transition period for ZetaChain, moving from "laying the foundation" to "gaining momentum."
From the beginning, we focused on interoperability. We are not just building bridges but creating a universal blockchain that natively connects all chains. The goal is that developers no longer have to deploy and maintain logic for each chain individually; a single deployment can cover ecosystems like Bitcoin, Ethereum, and Solana in one go.
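To make the "deploy once, reach every chain" idea concrete, here is a minimal TypeScript sketch of the pattern Jessie describes: one piece of application logic receiving calls routed from different connected chains, instead of one deployment per chain. All names here (ConnectedChain, InboundCall, OmnichainApp) are illustrative assumptions, not ZetaChain's actual contract or SDK interfaces.

```typescript
// Illustrative sketch only: names below are hypothetical, not ZetaChain's real API.

type ConnectedChain = "bitcoin" | "ethereum" | "solana";

// A cross-chain call as seen by the single, universal deployment.
interface InboundCall {
  sourceChain: ConnectedChain; // which connected chain the call originated from
  sender: string;              // sender address on the source chain
  asset: string;               // asset identifier (e.g. a wrapped representation)
  amount: bigint;              // amount transferred along with the call
  payload: string;             // application-specific message
}

// One application, deployed once, handling traffic from every connected chain.
class OmnichainApp {
  private balances = new Map<string, bigint>();

  handle(call: InboundCall): void {
    // The same logic runs regardless of origin; no per-chain deployments.
    const key = `${call.sourceChain}:${call.sender}`;
    const prev = this.balances.get(key) ?? 0n;
    this.balances.set(key, prev + call.amount);
    console.log(`[${call.sourceChain}] ${call.sender} -> +${call.amount} (${call.payload})`);
  }

  balanceOf(chain: ConnectedChain, sender: string): bigint {
    return this.balances.get(`${chain}:${sender}`) ?? 0n;
  }
}

// Usage: the same instance serves calls routed from Bitcoin, Ethereum, and Solana.
const app = new OmnichainApp();
app.handle({ sourceChain: "bitcoin", sender: "bc1q...", asset: "BTC", amount: 1n, payload: "deposit" });
app.handle({ sourceChain: "ethereum", sender: "0xabc...", asset: "ETH", amount: 5n, payload: "deposit" });
```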
By 2025, we had achieved native connections with multiple major public chains, supported cross-chain assets and contract calls, and saw significant growth in user scale and ecosystem projects. More importantly, ZetaChain is no longer just a "technical concept" but has been validated as infrastructure capable of supporting real applications.
But at this stage, we began asking a more complex question:
If "connecting everything" is just the means, then what are we truly aiming to protect and empower?
When AI Becomes Powerful Enough, the Risks Become Real Enough
PANews: Why did you start thinking about the integration of blockchain and AI around the middle of last year? Was there a specific trigger?
Jessie:
The core logic stems from our vigilance regarding AI development trends and our commitment to privacy principles.
First, we observed AI shifting from an "efficiency tool" toward "cognitive centralization": what is being concentrated is no longer just information, but cognition itself. Mid-last year was a tipping point. Approximately 80% of the world's high-value AI interaction data was rapidly concentrating in the hands of a very few leading platforms. In May 2025, a US court ordered OpenAI to preserve all ChatGPT user chat histories, including deleted chats and sensitive conversations passing through its API services, which made us acutely aware of the harsh reality of this "digital centralization."
The data being captured goes far beyond simple natural language interactions; it encompasses your intentions, emotions, decision-making context, and long-term user preferences.
We have already witnessed the cost of data misuse: Facebook manipulating news feeds through algorithms and influencing presidential elections without users' awareness; travel and food delivery platforms using data for precise profiling and practicing "price discrimination" or "big data price gouging" against their most frequent users, hiding unfair treatment behind the algorithm; and social media recommendation algorithms creating "information cocoons" that fracture the worldviews of different groups.
But large models are more insidious than these social networks or search engines. They don't just mislead you on the consumption side; they intervene deeply in your thought process. When you ask AI for help with a decision, you can hardly tell whether its response is neutral or subtly shaped by commercial interests or algorithmic biases. When a system gradually comes to understand you better than you understand yourself, and in turn begins to shape your judgment, the boundary of independent thinking blurs. This is the systemic risk we are concerned about.
The second reason comes from our own DNA and long-term choices.
Ankur Nandwani, a co-founder of the Brave privacy browser ($BAT), is also a core contributor to ZetaChain. Today, Brave's monthly revenue exceeds $10 million, and another privacy-focused application, the instant messaging app Signal, has reached 70 million monthly active users. These facts keep validating one point: privacy is not a niche demand but a long-underestimated, consistently essential need.
We have always believed that privacy will not matter less in the AI era; it will matter more. As AI intervenes more deeply in cognition, decision-making, and behavior, data sovereignty and usage boundaries are no longer just technical issues but fundamental questions about individual freedom and social structure.
It is against these two backgrounds that we began systematically considering: could there be an infrastructure for LLMs that reintroduces data sovereignty, verifiability, and decentralization into the AI system without sacrificing AI capabilities? This is the fundamental reason we started thinking about blockchain and AI together.
Why Blockchain is Still the Answer
PANews: Many people might think "AI privacy" is a Web2 problem. Why do you believe blockchain is the key?
Jessie:
Among all technologies, only blockchain inherently addresses the three core challenges: ownership, tamper-proof verification, and trustless system design.
An AI company can promise "we don't do evil," but the philosophy of blockchain is: you don't even need to trust me.
We increasingly believe that privacy is not a feature but an architectural choice. Cognitive sovereignty is not guaranteed by terms of service but enforced by technology.
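As a concrete illustration of "tamper-proof verification" and "you don't even need to trust me," here is a minimal sketch of a generic commitment pattern: the memory content stays with the user, and only a hash is anchored publicly, so later tampering can be detected without revealing the contents. This is an assumption-laden illustration of the general idea, not a description of Anuma's or ZetaChain's actual implementation.

```typescript
import { createHash } from "node:crypto";

// A private memory entry stays with the user; only its hash is published.
interface MemoryEntry {
  timestamp: number;
  content: string; // private: never leaves the user's custody
}

// Commitment that could be anchored on a public chain.
function commit(entry: MemoryEntry): string {
  return createHash("sha256")
    .update(`${entry.timestamp}:${entry.content}`)
    .digest("hex");
}

// Later, anyone holding the entry can check it against the public commitment,
// without having had to trust the party that stored it.
function verify(entry: MemoryEntry, anchoredHash: string): boolean {
  return commit(entry) === anchoredHash;
}

const entry: MemoryEntry = { timestamp: Date.now(), content: "prefers long-form analysis" };
const anchored = commit(entry);                                   // published / anchored on-chain
console.log(verify(entry, anchored));                             // true: content intact
console.log(verify({ ...entry, content: "tampered" }, anchored)); // false: tampering detected
```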
The Birth of Anuma, and Why It's Defined as a "Private Memory Layer"
Jessie:
Anuma was born precisely to solve the core paradox we just mentioned: we want AI's "intelligence" but not its "manipulation."
The current AI logic is: if you want it to be smarter and understand you better, you must feed it massive amounts of context and personal information. As interactions deepen, these fragments of information gradually piece together a "memory layer" of your true self. You can try asking an AI: "In your eyes, what kind of person am I?" You'll find its answer is often startlingly accurate. This indicates it has already grasped your mental model.
But the problem is, this "memory layer" is currently hosted on the servers of tech giants.
Anuma's starting point is not "anti-AI" but "anti-monopoly." We believe that since "memory" is an inevitable byproduct of AI evolution, it should not naturally belong to any single platform. This memory should be held and controlled by you, and you should be able to take it with you at any time.
By defining Anuma as a "private memory layer," we aim to defend the last line of cognitive sovereignty for humanity in a highly intelligent future. We want to achieve this: allow data to continuously generate value and make you more powerful, while preventing it from becoming a weapon that alienates and attacks us.
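One way to read "this memory should be held and controlled by you, and you should be able to take it with you" is that the memory layer is a user-held, encrypted artifact that can be exported and re-imported, rather than a row in a platform's database. Below is a minimal sketch under that assumption; the structures and field names are hypothetical and say nothing about how Anuma actually stores memory.

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// The "memory layer" as a user-held artifact: encrypted with the user's key,
// exportable as plain bytes, importable anywhere. Names are illustrative.
interface MemoryLayer {
  preferences: string[];
  conversationSummaries: string[];
}

function exportMemory(memory: MemoryLayer, key: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(JSON.stringify(memory), "utf8"), cipher.final()]);
  // iv | authTag | ciphertext: the user can carry this blob to any platform.
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]);
}

function importMemory(blob: Buffer, key: Buffer): MemoryLayer {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const ciphertext = blob.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]);
  return JSON.parse(plaintext.toString("utf8"));
}

// The key never leaves the user; whoever lacks it cannot read the memory.
const userKey = randomBytes(32);
const blob = exportMemory({ preferences: ["concise answers"], conversationSummaries: [] }, userKey);
console.log(importMemory(blob, userKey).preferences); // ["concise answers"]
```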
What Functions Does Anuma Have?
PANews: How would you introduce Anuma to the average person? What core functions has it currently implemented?
Jessie:
Simply put, Anuma is a private AI butler who will never tell your secrets. The most remarkable thing about this butler is that he doesn't work alone; he brings a whole team of "AI assistants" to serve you. If ChatGPT, Claude, or Gemini are like top expert assistants from different fields, then Anuma is the overall coordinator, the trusted chief butler who knows all your secrets but remains tight-lipped.
This architecture solves two core pain points in current AI usage:
- First, it is your "cognitive firewall": all sensitive information and historical memory stay locked with the butler (Anuma). When he dispatches the assistants to handle tasks, he ensures your privacy is not "stolen" for model training. AI can become smarter, but your raw data never leaves your control.
- Second, it is your "memory synchronizer": when you instruct the butler to switch to a different "assistant" for a problem, the conversation memory carries over completely. You don't need to jump between different apps or repeat your background to every new assistant. The butler keeps each assistant "synced up"; whichever one you call on, it understands you like an old friend. (A rough sketch of both roles follows at the end of this answer.)
In terms of user experience, you see only a minimalist, natural conversation interface; but under the hood, we use technical means to reassemble the digital sovereignty that was originally scattered in the hands of the giants and return it to the user.
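Here is a rough sketch of the two roles described above, under heavy assumptions: a local coordinator that (a) redacts sensitive strings before any prompt leaves the device, and (b) maintains one continuous local memory so every external model receives the same context summary. The assistant names are placeholders and nothing here reflects Anuma's actual code.

```typescript
// Hypothetical sketch; not Anuma's implementation.

type Assistant = "chatgpt" | "claude" | "gemini";

interface PrivateMemory {
  facts: string[];          // long-term context, held locally only
  sensitive: Set<string>;   // strings that must never leave the device
}

class Coordinator {
  constructor(private memory: PrivateMemory) {}

  // "Cognitive firewall": strip sensitive strings before anything goes out.
  private redact(text: string): string {
    let out = text;
    for (const secret of this.memory.sensitive) {
      out = out.split(secret).join("[redacted]");
    }
    return out;
  }

  // "Memory synchronizer": every assistant gets the same local summary,
  // so switching models does not mean re-explaining yourself.
  private contextSummary(): string {
    return this.memory.facts.join("; ");
  }

  ask(assistant: Assistant, question: string): string {
    const outbound = `context: ${this.redact(this.contextSummary())}\nquestion: ${this.redact(question)}`;
    // In a real system this would call the external model's API; here we only
    // show what would actually leave the device.
    return `[to ${assistant}] ${outbound}`;
  }

  remember(fact: string): void {
    this.memory.facts.push(fact); // memory accumulates locally, not on a platform
  }
}

const me = new Coordinator({
  facts: ["works at Acme", "planning a job change"],
  sensitive: new Set(["Acme"]),
});
me.remember("prefers direct answers");
console.log(me.ask("claude", "How should I negotiate my exit from Acme?"));
// -> the context and question go out with "Acme" replaced by "[redacted]"
```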
Humanity's Independent Thought is Still Worth Defending
PANews: Why does your branding repeatedly emphasize "the autonomy of the human mind"?
Jessie:
We believe this is an underestimated crisis. As AI increasingly resembles a "second brain," people are prone to abandon thinking and outsource judgment.
And if this "second brain" does not truly belong to you, then you are essentially handing over your right to think to a system you cannot audit.
The bottom line Anuma wants to protect is: your memory, your context, your thought history should be your private property. Not the platform's, not the model's, and not the advertising system's.
I believe AI will become one of humanity's most important tools. But the prerequisite is that humans remain the masters of their own thoughts.
From ZetaChain to Anuma: Blockchain's Extension into the Web2 World
PANews: How do you view the relationship between ZetaChain and Anuma?
Jessie:
Anuma is the flagship application of ZetaChain's move into its 2.0 phase and its full embrace of the AI narrative. I see the relationship between the two as "the infrastructure drives the application, and the application defines the value." Internally, we don't define Anuma as a "Web3-specific" product; it is built entirely to Web2 consumer standards and user experience. At the same time, it runs completely on top of ZetaChain.
This is the direction we believe Web3 should take: the infrastructure becomes invisible, but its value is irreplaceable. Only when users no longer perceive the existence of the "chain" and only feel a "more usable, more secure" service will Web3 truly move towards mass adoption.
Epilogue
At the end of the interview, Jessie said: "The real danger is not how smart AI becomes, but that humans hand over the steering wheel of decision-making without realizing it."
Perhaps, Anuma is more than just an AI product.
It is more like a reminder:
In a highly intelligent world, maintaining independent thinking is itself an ability that needs to be guarded by technology.
